Bounds for the Tracking Error of First-Order Online Optimization Methods

Authors

Abstract

This paper investigates online algorithms for smooth time-varying optimization problems, focusing first on methods with constant step-size, momentum, and extrapolation-length. Assuming strong convexity, precise results for the tracking iterate error (the limit supremum of the norm of the difference between the optimal solution and the iterates) for online gradient descent are derived. The paper then considers a general first-order framework, where a universal lower bound on the tracking iterate error is established. Furthermore, a method using “long-steps” is proposed and shown to achieve this lower bound up to a fixed constant. The methods are compared on specific examples. Finally, the paper analyzes the effect of regularization when the cost is not strongly convex. With regularization, it is possible to achieve a non-regret bound. The paper ends by testing the accelerated and regularized methods on synthetic time-varying least-squares and logistic regression problems, respectively.
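As a minimal illustrative sketch (not the authors' code), the following Python snippet runs online gradient descent with a constant step size on a synthetic time-varying least-squares problem and records the tracking iterate error ‖x_k − x*_k‖; the drift model, dimensions, and step size are assumptions chosen for illustration.

import numpy as np

rng = np.random.default_rng(0)
d, T = 5, 200
A = rng.standard_normal((20, d))
L = np.linalg.norm(A.T @ A, 2)                  # smoothness constant of the quadratic
eta = 1.0 / L                                   # constant step size

x = np.zeros(d)
errs = []
for k in range(T):
    target = np.sin(0.05 * k) * np.ones(d)      # slowly drifting ground truth
    b = A @ target
    x_star = np.linalg.lstsq(A, b, rcond=None)[0]   # minimizer at time k
    x = x - eta * A.T @ (A @ x - b)             # one online gradient step per time k
    errs.append(np.linalg.norm(x - x_star))

print("approximate lim-sup of the tracking error:", max(errs[T // 2:]))

Here the limit supremum is approximated by the largest error over the second half of the horizon, after the transient from the cold start has died out.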



Similar papers

The Effects of Error Correction Methods on Pronunciation Accuracy

The aim of this study was to identify the most effective error-correction method for the accuracy of stress and intonation in English word pronunciation. The study was carried out by applying four error-correction methods in four groups, three experimental and one control, all consisting of upper-intermediate students working with the first Passages book. The first group contained 15 students, the second 14, the third 15, and the last 16. The course ran for 10 weeks and...


First-Order and Second-Order Conditions for Error Bounds

For a lower semicontinuous function f on a Banach space X, we study the existence of a positive scalar μ such that the distance function dS associated with the solution set S of f(x) ≤ 0 satisfies dS(x) ≤ μ max{f(x), 0} for each point x in a neighborhood of some point x0 in X with f(x) < ε for some 0 < ε ≤ +∞. We give several sufficient conditions for this in terms of an abstract subdifferential and...
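A one-line worked example (ours, not from the paper): for f(x) = |x| − 1 on X = ℝ, the solution set is S = [−1, 1] and dS(x) = max{|x| − 1, 0} = max{f(x), 0}, so the error bound holds at every point with μ = 1.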


From error bounds to the complexity of first-order descent methods for convex functions

This paper shows that error bounds can be used as effective tools for deriving complexity results for first-order descent methods in convex minimization. In a first stage, this objective led us to revisit the interplay between error bounds and the Kurdyka-Łojasiewicz (KL) inequality. One can show the equivalence between the two concepts for convex functions having a moderately flat profile near t...
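A hedged illustration of that interplay (ours, not from the paper): a μ-strongly convex f with minimum value 0 satisfies ‖∇f(x)‖² ≥ 2μ f(x); taking φ(s) = √(2s/μ) as the desingularizing function, this is exactly the KL inequality φ′(f(x)) ‖∇f(x)‖ ≥ 1, and it implies the error bound dist(x, S) ≤ √(2 f(x)/μ).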


First-order Methods for Geodesically Convex Optimization

Geodesic convexity generalizes the notion of (vector space) convexity to nonlinear metric spaces. But unlike convex optimization, geodesically convex (g-convex) optimization is much less developed. In this paper we contribute to the understanding of g-convex optimization by developing iteration complexity analysis for several first-order algorithms on Hadamard manifolds. Specifically, we prove ...
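A minimal sketch of the idea (ours, not from the paper): the positive half-line with the metric g(x) = 1/x² is a one-dimensional Hadamard manifold, and f(x) = ½(log x − log a)² is g-convex on it despite being nonconvex in the Euclidean sense. The constants a, eta, and the iteration count below are illustrative assumptions.

import math

a = 5.0        # minimizer of f
x = 0.1        # initial point
eta = 0.5      # step size
for k in range(30):
    egrad = (math.log(x) - math.log(a)) / x   # Euclidean gradient of f
    rgrad = x * x * egrad                     # Riemannian gradient: g(x)^(-1) * egrad
    x = x * math.exp(-eta * rgrad / x)        # exponential-map step Exp_x(-eta * rgrad)
print(x)   # approaches a = 5.0

In log coordinates u = log x, each step is u ← u − η(u − log a), so the geodesic method contracts linearly toward the g-convex minimizer.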


\(\ell_{1, p}\)-Norm Regularization: Error Bounds and Convergence Rate Analysis of First-Order Methods

In recent years, the ℓ1,p-regularizer has been widely used to induce structured sparsity in the solutions to various optimization problems. Currently, such ℓ1,p-regularized problems are typically solved by first-order methods. Motivated by the desire to analyze the convergence rates of these methods, we show that for a large class of ℓ1,p-regularized problems, an error bound condition is satisf...
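As an illustrative sketch of such a first-order method (not the paper's code), the following runs proximal gradient descent on an ℓ1,2-regularized (group-lasso) least-squares problem; the data, group structure, and regularization weight are assumptions.

import numpy as np

rng = np.random.default_rng(1)
n, d, g = 50, 12, 4                          # g groups of size d // g
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)
lam = 0.5
eta = 1.0 / np.linalg.norm(A.T @ A, 2)       # step size 1 / L

def prox_group_l2(x, t):
    # blockwise soft-thresholding: the prox of t * (sum over groups of ||x_group||_2)
    out = np.zeros_like(x)
    for block in np.split(np.arange(d), g):
        nrm = np.linalg.norm(x[block])
        if nrm > t:
            out[block] = (1.0 - t / nrm) * x[block]
    return out

x = np.zeros(d)
for _ in range(500):
    x = prox_group_l2(x - eta * A.T @ (A @ x - b), lam * eta)
print(x)    # group-sparse solution

An error bound condition of the kind studied in the paper is what upgrades the convergence rate of such an iteration from sublinear to linear.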



Journal

Journal title: Journal of Optimization Theory and Applications

Year: 2021

ISSN: 0022-3239, 1573-2878

DOI: https://doi.org/10.1007/s10957-021-01836-9